The rapid on-site evaluation (ROSE) technique can significantly accelerate the diagnosis of pancreatic cancer by enabling immediate analysis of fast-stained cytopathological images. Computer-aided diagnosis (CAD) could potentially alleviate the shortage of pathologists for ROSE. However, the cancerous patterns vary greatly between different samples, which makes the CAD task extremely challenging. Moreover, due to differences in staining quality and acquisition device types, ROSE images exhibit complex perturbations in color distribution, brightness, and contrast. To address these challenges, we propose a shuffle instance-based Vision Transformer (SI-ViT) approach, which can reduce the perturbations and enhance the modeling among the instances. With the reorganized shuffled instances and their bag-level soft labels, the approach uses a regression head to make the model focus on the cells rather than on the various perturbations. Coupled with a classification head, the model can effectively recognize the general distribution patterns across different instances. The results show significant improvements in classification accuracy together with more accurate attention regions, indicating that the diverse patterns of ROSE images are effectively extracted and the complex perturbations are greatly reduced. This also suggests that SI-ViT has great potential for analyzing cytopathological images. The code and experimental results are available at https://github.com/sagizty/mil-si.
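The shuffle-instance idea can be illustrated with a small sketch (a sketch under assumptions, not the authors' implementation: patch size, shuffling granularity, and the soft-label construction are guesses): patches from a bag of ROSE images are shuffled across images and reassembled, and each reassembled image receives a bag-level soft label reflecting the origin of its patches, which a regression head can then be trained against.

```python
import torch

def shuffle_instances(images, labels, patch=16):
    """Shuffle fixed-size patches (instances) across a bag of images and
    build bag-level soft labels from the origin of each patch.

    images: (B, C, H, W) with H and W divisible by `patch`; labels: (B,) class ids.
    Returns reassembled images (B, C, H, W) and soft labels (B, num_classes).
    """
    B, C, H, W = images.shape
    num_classes = int(labels.max().item()) + 1
    gh, gw = H // patch, W // patch

    # (B, gh*gw, C, patch, patch): every image as a sequence of instances
    patches = images.unfold(2, patch, patch).unfold(3, patch, patch)
    patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(B, gh * gw, C, patch, patch)

    # Record which image each instance came from, then shuffle all instances globally.
    owner = torch.arange(B).repeat_interleave(gh * gw)
    flat = patches.reshape(B * gh * gw, C, patch, patch)
    perm = torch.randperm(B * gh * gw)
    flat, owner = flat[perm], owner[perm]

    # Reassemble the shuffled instances into images of the original layout.
    mixed = flat.reshape(B, gh, gw, C, patch, patch)
    mixed = mixed.permute(0, 3, 1, 4, 2, 5).reshape(B, C, H, W)

    # Bag-level soft label: fraction of patches contributed by each class.
    owner_labels = labels[owner].reshape(B, gh * gw)
    soft = torch.zeros(B, num_classes)
    soft.scatter_add_(1, owner_labels, torch.ones(B, gh * gw))
    soft /= gh * gw
    return mixed, soft
```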
Pancreatic cancer is one of the most malignant cancers in the world; it deteriorates rapidly and has a very high mortality rate. The rapid on-site evaluation (ROSE) technique innovates the workflow by having on-site pathologists immediately analyze fast-stained cytopathological images, which enables faster diagnosis in this time-pressured process. However, the wider expansion of ROSE diagnosis has been hindered by the shortage of experienced pathologists. To overcome this problem, we propose a hybrid high-performance deep learning model to enable an automated workflow, thereby freeing up the valuable time that would otherwise occupy pathologists. By introducing Transformer blocks into this field with our specific multi-stage hybrid design, the spatial features generated by the convolutional neural network (CNN) significantly enhance the global modeling of the Transformer. Using the multi-stage spatial features as global attention guidance, this design combines the robustness of the CNN's inductive bias with the sophisticated global modeling capability of the Transformer. A dataset of 4,240 ROSE images was collected to evaluate the method in this unexplored field. The proposed multi-stage hybrid Transformer (MSHT) achieves 95.68% classification accuracy, which is distinctly higher than state-of-the-art models. Facing the need for interpretability, MSHT also exhibits more accurate attention regions than its counterparts. The results demonstrate that MSHT can accurately distinguish cancer samples at an unprecedented image scale, laying the foundation for deploying automatic decision systems and expanding ROSE in clinical practice. The code and records are available at https://github.com/sagizty/multi-stage-ybrid-transformer.
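The guidance idea could look roughly like the sketch below, where pooled CNN stage features serve as keys and values that the Transformer tokens attend to; the module name, dimensions, and pooling grid are illustrative assumptions rather than the paper's actual design.

```python
import torch
import torch.nn as nn

class GuidedAttentionBlock(nn.Module):
    """Cross-attention between Transformer tokens and pooled CNN stage features.

    A simplified stand-in for a multi-stage hybrid design: the CNN feature map
    of one stage is pooled into guidance tokens that the Transformer tokens can
    attend to, injecting convolutional spatial cues into global modeling.
    """
    def __init__(self, dim=384, cnn_channels=512, num_heads=6, grid=7):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(grid)           # coarse spatial guidance grid
        self.proj = nn.Linear(cnn_channels, dim)         # match the embedding width
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tokens, cnn_feat):
        # tokens: (B, N, dim) Transformer tokens; cnn_feat: (B, C, H, W) stage feature map
        guide = self.pool(cnn_feat).flatten(2).transpose(1, 2)   # (B, grid*grid, C)
        guide = self.proj(guide)                                  # (B, grid*grid, dim)
        out, _ = self.attn(self.norm(tokens), guide, guide)       # queries = ViT tokens
        return tokens + out                                       # residual update

# smoke test with random tensors
block = GuidedAttentionBlock()
print(block(torch.randn(2, 197, 384), torch.randn(2, 512, 14, 14)).shape)  # (2, 197, 384)
```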
The goal of unpaired image-to-image translation is to produce an output image that reflects the style of the target domain while keeping the unrelated content of the input source image unchanged. However, because existing methods pay insufficient attention to content changes, semantic information from the source image suffers degradation during translation. To address this problem, in this paper we introduce a novel approach, the Global and Local Alignment Network (GLA-Net). The global alignment network aims to transfer the input image from the source domain to the target domain. To do this effectively, we learn the parameters (mean and standard deviation) of a multivariate Gaussian distribution as style features using an MLP-Mixer based style encoder. To transfer the style more accurately, we employ an adaptive instance normalization layer in the encoder, taking the parameters of the target multivariate Gaussian distribution as input. We also employ normalization and likelihood losses to further reduce the domain gap and produce high-quality outputs. In addition, we introduce a local alignment network, which employs a pretrained self-supervised model to produce an attention map through a novel local alignment loss, ensuring that the translation network focuses on the relevant pixels. Extensive experiments on five public datasets show that our method effectively produces sharper and more realistic images than existing approaches. Our code is available at https://github.com/ygjwd12345/glanet.
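The adaptive instance normalization step mentioned above follows the standard AdaIN formulation; below is a minimal sketch, assuming the style statistics are supplied per channel by a style encoder over target-domain images.

```python
import torch

def adaptive_instance_norm(content, style_mean, style_std, eps=1e-5):
    """Adaptive instance normalization (AdaIN).

    Normalizes each channel of the content feature map to zero mean and unit
    variance, then rescales it with style statistics, here assumed to be the
    per-channel mean and standard deviation of a target-domain distribution.

    content:    (B, C, H, W) feature map from the content image
    style_mean: (B, C) target means
    style_std:  (B, C) target standard deviations
    """
    mean = content.mean(dim=(2, 3), keepdim=True)
    std = content.std(dim=(2, 3), keepdim=True) + eps
    normalized = (content - mean) / std
    return normalized * style_std[..., None, None] + style_mean[..., None, None]

# usage: in practice mu and sigma would come from a style encoder
feat = torch.randn(4, 256, 32, 32)
mu, sigma = torch.zeros(4, 256), torch.ones(4, 256)
stylized = adaptive_instance_norm(feat, mu, sigma)
```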
In autonomous driving, learning segmentation models that can adapt to various environmental conditions is crucial. In particular, coping with severe illumination changes is a pressing need, as models trained on daytime data will perform poorly at nighttime. In this paper, we study the problem of domain adaptive nighttime semantic segmentation (DANSS), which aims to learn a discriminative nighttime model with a labeled daytime dataset and an unlabeled dataset that includes coarsely aligned day-night image pairs. To this end, we propose a novel Bidirectional Mixing (Bi-Mix) framework for DANSS, which can contribute to both the image translation and the segmentation adaptation processes. Specifically, in the image translation stage, Bi-Mix leverages the knowledge of day-night image pairs to improve the quality of the generated nighttime images. On the other hand, in the segmentation adaptation stage, Bi-Mix effectively bridges the distribution gap between the daytime and nighttime domains so that the model adapts to the nighttime domain. In both processes, Bi-Mix simply operates by mixing two samples without any extra hyper-parameters, and is thus easy to implement. Extensive experiments on the Dark Zurich and Nighttime Driving datasets demonstrate the advantage of the proposed Bi-Mix and show that our approach achieves state-of-the-art performance on DANSS. Our code is available at https://github.com/ygjwd12345/bimix.
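A hyper-parameter-free mixing of two samples can be illustrated with a ClassMix-style paste, shown below; this is a stand-in sketch and not necessarily the exact mixing operation used in Bi-Mix.

```python
import torch

def class_mix(img_a, lbl_a, img_b, lbl_b):
    """Mix two samples by pasting the pixels of half of sample A's classes onto sample B.

    A ClassMix-style operation used here to illustrate mixing without a mixing
    ratio or other hyper-parameter: only the two samples themselves are needed.

    img_a, img_b: (C, H, W) images; lbl_a, lbl_b: (H, W) integer label maps
    (ground truth for the daytime sample, pseudo-labels for the nighttime one).
    """
    classes = torch.unique(lbl_a)
    picked = classes[torch.randperm(len(classes))[: max(1, len(classes) // 2)]]
    mask = torch.isin(lbl_a, picked)                       # (H, W) boolean paste mask
    mixed_img = torch.where(mask.unsqueeze(0), img_a, img_b)
    mixed_lbl = torch.where(mask, lbl_a, lbl_b)
    return mixed_img, mixed_lbl

day_img, day_lbl = torch.rand(3, 64, 64), torch.randint(0, 19, (64, 64))
night_img, night_lbl = torch.rand(3, 64, 64), torch.randint(0, 19, (64, 64))
mixed_img, mixed_lbl = class_mix(day_img, day_lbl, night_img, night_lbl)
```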
Convolutional neural networks have made major progress in addressing pixel-level prediction tasks such as semantic segmentation, depth estimation, and surface normal prediction, benefiting from their powerful capability for visual representation learning. Typically, state-of-the-art models integrate attention mechanisms for improved deep feature representations. Recently, several works have demonstrated the importance of learning and combining both spatial and channel attention for deep feature refinement. In this paper, we aim to effectively boost previous approaches and propose a unified deep framework that jointly learns spatial attention maps and channel attention vectors in a principled manner, so as to structure the resulting attention tensors and model the interactions between these two types of attention. Specifically, we integrate the attention estimation and interaction within a probabilistic representation-learning framework, leading to the Variational Structured Attention Network (VISTA-Net). We implement the inference rules within the neural network, thus allowing end-to-end learning of the probabilistic parameters and the CNN front-end parameters. As demonstrated by our extensive empirical evaluation on six large-scale datasets for dense visual prediction, VISTA-Net outperforms the state of the art across multiple continuous and discrete prediction tasks, confirming the benefit of jointly structured spatial-channel attention estimation for deep representation learning. The code is available at https://github.com/ygjwd12345/vista-ner.
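The deterministic core of combining a spatial attention map with a channel attention vector into a joint attention tensor can be sketched as below; the variational, probabilistic treatment that gives VISTA-Net its name is not reproduced here, and the layer choices are assumptions.

```python
import torch
import torch.nn as nn

class JointSpatialChannelAttention(nn.Module):
    """Joint spatial-channel attention (deterministic sketch).

    A spatial map (B, 1, H, W) and a channel vector (B, C, 1, 1) are predicted
    from the same feature map and combined by broadcasting into a full
    (B, C, H, W) attention tensor that rescales the features.
    """
    def __init__(self, channels):
        super().__init__()
        self.spatial = nn.Conv2d(channels, 1, kernel_size=3, padding=1)
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
        )

    def forward(self, x):
        s = torch.sigmoid(self.spatial(x))     # (B, 1, H, W): where to look
        c = torch.sigmoid(self.channel(x))     # (B, C, 1, 1): what to emphasize
        attention = s * c                      # broadcast into a joint attention tensor
        return x * attention                   # refined features

attn = JointSpatialChannelAttention(64)
print(attn(torch.randn(2, 64, 32, 32)).shape)  # torch.Size([2, 64, 32, 32])
```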
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot or only marginally benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, inputs, network regularization, sequential distillation, etc., revealing that: 1) Distilling token relations is more effective than CLS-token- and feature-based distillation; 2) Using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) Weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures, as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
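A token-relation distillation loss in the spirit of finding 1) might look like the sketch below; the choice of relation (plain token similarities rather than attention maps), the temperature, and the KL matching are assumptions for illustration, not TinyMIM's exact loss.

```python
import torch
import torch.nn.functional as F

def token_relation_loss(student_tokens, teacher_tokens, tau=1.0):
    """Distill token-to-token relations instead of individual token features.

    student_tokens, teacher_tokens: (B, N, D) token sequences from the two
    networks (the teacher tokens would come from an intermediate layer, and the
    student tokens are assumed to be projected to the teacher dimension).
    Each sequence's (N, N) similarity matrix is treated as a relation map, and
    the student map is pulled toward the teacher map with a KL divergence.
    """
    def relations(tokens):
        tokens = F.normalize(tokens, dim=-1)
        sim = tokens @ tokens.transpose(1, 2) / tau        # (B, N, N)
        return F.log_softmax(sim, dim=-1)

    s_rel = relations(student_tokens)
    with torch.no_grad():
        t_rel = relations(teacher_tokens)
    return F.kl_div(s_rel, t_rel, log_target=True, reduction="batchmean")

# toy usage with 196 patch tokens in a shared 768-dim space
loss = token_relation_loss(torch.randn(2, 196, 768), torch.randn(2, 196, 768))
```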
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
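The cross-modal pattern of letting object queries attend jointly to image and point-cloud tokens can be sketched as follows; the token construction, position encoding, and prediction heads of the actual CMT are richer than this minimal stand-in, whose dimensions and head sizes are assumptions.

```python
import torch
import torch.nn as nn

class MultiModalDetectionHead(nn.Module):
    """Object queries attend jointly to image and point-cloud tokens.

    Minimal sketch of the cross-modal pattern: both modalities are projected
    into a shared embedding space, concatenated into one token sequence, and a
    transformer decoder with learnable queries predicts box parameters from it.
    """
    def __init__(self, dim=256, num_queries=100, img_dim=256, pts_dim=128):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, dim)
        self.pts_proj = nn.Linear(pts_dim, dim)
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=3)
        self.box_head = nn.Linear(dim, 10)   # e.g. center, size, yaw, velocity terms
        self.cls_head = nn.Linear(dim, 10)   # e.g. 10 object classes

    def forward(self, img_tokens, pts_tokens):
        # img_tokens: (B, Ni, img_dim), pts_tokens: (B, Np, pts_dim)
        memory = torch.cat([self.img_proj(img_tokens), self.pts_proj(pts_tokens)], dim=1)
        q = self.queries.unsqueeze(0).expand(img_tokens.size(0), -1, -1)
        out = self.decoder(q, memory)                    # (B, num_queries, dim)
        return self.box_head(out), self.cls_head(out)

head = MultiModalDetectionHead()
boxes, logits = head(torch.randn(2, 600, 256), torch.randn(2, 400, 128))
```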
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
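The naive variant of injecting triggers into the raw data before distillation can be illustrated as below; this is a sketch with an assumed patch trigger and target class, and DOORPING's iterative trigger optimization during distillation is not shown.

```python
import torch

def stamp_trigger(images, labels, target_class=0, poison_rate=0.1, size=3):
    """Add a small white-square trigger to a fraction of images and relabel them.

    Illustrates the naive setting only: triggers are inserted into the raw data
    once, before distillation runs, rather than optimized during the procedure.

    images: (N, C, H, W) in [0, 1]; labels: (N,) integer labels.
    """
    images, labels = images.clone(), labels.clone()
    n_poison = int(len(images) * poison_rate)
    idx = torch.randperm(len(images))[:n_poison]
    images[idx, :, -size:, -size:] = 1.0     # bottom-right white patch as the trigger
    labels[idx] = target_class               # attacker-chosen target class
    return images, labels, idx

x = torch.rand(100, 3, 32, 32)
y = torch.randint(0, 10, (100,))
x_poisoned, y_poisoned, poisoned_idx = stamp_trigger(x, y)
```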
Blind image quality assessment (BIQA) remains challenging due to the diversity of distortions and the variation of image content, which complicate the distortion patterns across different scales and aggravate the difficulty of the regression problem in BIQA. However, existing BIQA methods often fail to consider multi-scale distortion patterns and image content, and little research has been done on learning strategies that make the regression model perform better. In this paper, we propose a simple yet effective Progressive Multi-Task Image Quality Assessment (PMT-IQA) model, which contains a multi-scale feature extraction module (MS) and a progressive multi-task learning module (PMT), to help the model learn complex distortion patterns and better optimize the regression problem, in line with the easy-to-hard principle of human learning. To verify the effectiveness of the proposed PMT-IQA model, we conduct experiments on four widely used public datasets; the experimental results indicate that PMT-IQA outperforms the comparison approaches and that both the MS and PMT modules improve the model's performance.
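One plausible reading of the MS and PMT modules is sketched below: multi-scale pooled features feed a coarse quality-level classifier (easy task) and a score regressor (hard task), with the loss weighting shifted from the easy task to the hard one over training. The concrete architecture and schedule are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProgressiveIQAHead(nn.Module):
    """Multi-scale pooling plus a two-task head for image quality prediction."""
    def __init__(self, channels=512, levels=5):
        super().__init__()
        self.pools = nn.ModuleList([nn.AdaptiveAvgPool2d(s) for s in (1, 2, 4)])
        in_dim = channels * (1 + 4 + 16)
        self.cls_head = nn.Linear(in_dim, levels)   # easy task: coarse quality level
        self.reg_head = nn.Linear(in_dim, 1)        # hard task: continuous quality score

    def forward(self, feat):
        # feat: (B, C, H, W) backbone feature map; concatenate pooled scales
        ms = torch.cat([p(feat).flatten(1) for p in self.pools], dim=1)
        return self.cls_head(ms), self.reg_head(ms).squeeze(-1)

def progressive_loss(cls_logits, reg_pred, level_target, score_target, progress):
    """Shift emphasis from classification (early) to regression (late).

    progress in [0, 1] is the fraction of training completed; this linear
    weighting schedule is an assumption made for illustration.
    """
    cls_loss = F.cross_entropy(cls_logits, level_target)
    reg_loss = F.l1_loss(reg_pred, score_target)
    return (1.0 - progress) * cls_loss + progress * reg_loss

head = ProgressiveIQAHead()
logits, score = head(torch.randn(2, 512, 7, 7))
```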
Automatic music generation with artificial intelligence typically requires a large amount of data, which is hard to obtain for many less common genres and musical instruments. To tackle this issue, we present ongoing work and preliminary findings on the possibility for deep models to transfer knowledge from language to music, by fine-tuning large language models pre-trained on a massive text corpus on only hundreds of MIDI files of drum performances. We show that by doing so, one of the largest state-of-the-art models (GPT3) is capable of generating reasonable drum grooves, while a model that is not pre-trained (a Transformer) shows no such ability beyond naive repetition. Evaluating generated music is a challenging task, and evaluating drum grooves, for which there is little precedent in the literature, is even more so. Hence, we propose a tailored structural evaluation method and analyze drum grooves produced by GPT3 against those played by human professionals, exposing the strengths and weaknesses of such generation by language-to-music transfer. Our findings suggest that language-to-music transfer learning with large language models is viable and promising.
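A minimal illustration of serializing a drum groove into text that a text-pretrained language model could be fine-tuned on is shown below; the vocabulary and time grid are assumptions and not the encoding used in this work.

```python
# Serialize a drum groove into a plain-text token sequence suitable for
# fine-tuning a text-pretrained language model. The vocabulary and 16th-note
# grid here are illustrative assumptions, not the encoding used in the paper.
DRUMS = {36: "kick", 38: "snare", 42: "hihat"}          # General MIDI drum notes

def groove_to_text(events, steps_per_bar=16):
    """events: list of (step, midi_note) pairs on a 16th-note grid."""
    grid = {}
    for step, note in events:
        grid.setdefault(step, []).append(DRUMS.get(note, f"note{note}"))
    tokens = []
    for step in range(steps_per_bar):
        hits = "+".join(sorted(grid[step])) if step in grid else "rest"
        tokens.append(f"t{step}:{hits}")
    return " ".join(tokens)

# a simple rock bar: kick on 0/8, snare on 4/12, hi-hats on every other step
bar = [(0, 36), (8, 36), (4, 38), (12, 38)] + [(s, 42) for s in range(0, 16, 2)]
print(groove_to_text(bar))
```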